
    Accountable infrastructure and its impact on internet security and privacy

    The Internet infrastructure relies on the correct functioning of the basic underlying protocols, which were designed for functionality. Security and privacy have been added post hoc, mostly by applying cryptographic means to different layers of communication. Lacking accountability as a fundamental property, the Internet infrastructure has no built-in ability to associate an action with the responsible entity, nor to detect or prevent misbehavior. In this thesis, we study accountability from several perspectives. First, we study the need for accountability in anonymous communication networks as a mechanism that provides repudiation for proxy nodes by tracing back selected outbound traffic in a provable manner. Second, we design a framework that provides a foundation for enforcing the right-to-be-forgotten law in a scalable and automated manner. The framework gives users a technical means to prove their eligibility for content removal from search results. Third, we analyze the Internet infrastructure to determine potential security risks and threats imposed by dependencies among entities on the Internet. Finally, we evaluate the feasibility of using hop count filtering as a mechanism for mitigating Distributed Reflective Denial-of-Service (DRDoS) attacks, and show conceptually that it cannot prevent these attacks.
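    The hop-count filtering mechanism evaluated in the thesis can be sketched in a few lines. This is a hypothetical illustration (the function names and the learned TTL table are my own, not from the thesis), showing the standard TTL-based inference that hop-count filtering relies on:

    ```python
    # Hop-count filtering sketch: infer how many hops a packet travelled from
    # its observed TTL, assuming the sender started from one of the common
    # initial TTL values (32, 64, 128, 255).

    COMMON_INITIAL_TTLS = (32, 64, 128, 255)

    def infer_hop_count(observed_ttl: int) -> int:
        """Estimate hops travelled: the smallest common initial TTL that is
        >= the observed TTL is assumed to be the starting value."""
        for initial in COMMON_INITIAL_TTLS:
            if observed_ttl <= initial:
                return initial - observed_ttl
        raise ValueError("TTL exceeds all known initial values")

    def is_suspicious(src_ip: str, observed_ttl: int, ttl_table: dict) -> bool:
        """Flag packets whose inferred hop count deviates from the value
        previously learned for that source address."""
        expected = ttl_table.get(src_ip)
        return expected is not None and infer_hop_count(observed_ttl) != expected
    ```

    The sketch also hints at the conceptual limitation the thesis identifies: in a reflective attack the amplified responses originate from legitimate reflectors, so their TTLs, and hence their inferred hop counts, look perfectly normal to such a filter.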

    Introducing Accountability to Anonymity Networks

    Many anonymous communication (AC) networks rely on routing traffic through proxy nodes to obfuscate the originator of the traffic. Without an accountability mechanism, exit proxy nodes risk sanctions by law enforcement if users commit illegal actions through the AC network. We present BackRef, a generic mechanism for AC networks that provides practical repudiation for proxy nodes by tracing selected outbound traffic back to the predecessor node (but not in the forward direction) through a cryptographically verifiable chain. It also provides an option for full (or partial) traceability back to the entry node, or even to the corresponding user, when all intermediate nodes cooperate. Moreover, to maintain a good balance between anonymity and accountability, the protocol incorporates whitelist directories at exit proxy nodes. BackRef offers improved deployability over related work and introduces a novel concept of pseudonymous signatures that may be of independent interest. We exemplify the utility of BackRef by integrating it into the onion routing (OR) protocol, and examine its deployability by considering several system-level aspects. We also present security definitions for the BackRef system (namely, anonymity, backward traceability, no forward traceability, and no false accusation) and conduct a formal security analysis of the OR protocol with BackRef using ProVerif, an automated cryptographic protocol verifier, establishing the aforementioned security properties against a strong adversarial model.
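    The backward-traceability idea can be illustrated with a simplified sketch. This is not the BackRef construction itself, which relies on pseudonymous signatures and a cryptographically verifiable chain; the MAC-based records and function names below are purely illustrative. Each relay keeps an authenticated back-pointer to its predecessor for a flow, so a trace can walk the chain toward the entry node but learns nothing in the forward direction:

    ```python
    # Illustrative backward-traceability sketch (NOT the BackRef protocol).
    # Each node authenticates a record linking a flow to its predecessor.

    import hashlib
    import hmac

    def make_backref(node_key: bytes, flow_id: bytes, predecessor: str) -> dict:
        """Record kept by a node: its predecessor for a flow, MAC-protected."""
        tag = hmac.new(node_key, flow_id + predecessor.encode(),
                       hashlib.sha256).hexdigest()
        return {"flow": flow_id, "predecessor": predecessor, "tag": tag}

    def verify_backref(node_key: bytes, record: dict) -> bool:
        expected = hmac.new(node_key,
                            record["flow"] + record["predecessor"].encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["tag"])

    def trace_back(records_by_node: dict, keys: dict,
                   flow_id: bytes, start_node: str) -> list:
        """Walk the chain from the exit node toward the entry node,
        verifying each back-pointer; stops at the first unverifiable hop
        or at a node that holds no record (e.g. the originating user)."""
        path, node = [start_node], start_node
        while node in records_by_node:
            rec = records_by_node[node]
            if rec["flow"] != flow_id or not verify_backref(keys[node], rec):
                break
            node = rec["predecessor"]
            path.append(node)
        return path
    ```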

    Who Controls the Internet? Analyzing Global Threats using Property Graph Traversals

    The Internet is built on top of intertwined network services, e.g., email, DNS, and content distribution networks operated by private or governmental organizations. Recent events have shown that these organizations may, knowingly or unknowingly, be part of global-scale security incidents, including state-sponsored mass surveillance programs and large-scale DDoS attacks. For example, in March 2015 the Great Cannon attack showed that an Internet service provider can weaponize millions of Web browsers and turn them into DDoS bots by injecting malicious JavaScript code into transiting TCP connections. While attack techniques and root cause vulnerabilities are routinely studied, we still lack models and algorithms to study the intricate dependencies between services and providers, reason about their abuse, and assess the attack impact. To close this gap, we present a technique that models services, providers, and dependencies as a property graph. Moreover, we present a taint-style propagation-based technique to query the model, and evaluate our framework on the top 100k Alexa domains.
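    The propagation idea can be sketched as a small graph traversal. This is a hedged illustration, not the paper's framework: the node names and the dependency relation below are hypothetical, and a real property graph carries typed nodes and edges rather than a plain adjacency map:

    ```python
    # Taint-style propagation over a dependency graph: starting from a set of
    # attacker-controlled seeds, mark every node that transitively depends on
    # them as tainted (i.e., potentially controllable by the attacker).

    from collections import deque

    def tainted_nodes(edges: dict, seeds: set) -> set:
        """edges maps a node to the set of nodes that depend on it.
        Returns all nodes reachable from the seeds along dependency edges."""
        tainted, queue = set(seeds), deque(seeds)
        while queue:
            node = queue.popleft()
            for dependent in edges.get(node, ()):
                if dependent not in tainted:
                    tainted.add(dependent)
                    queue.append(dependent)
        return tainted

    # Hypothetical example: a DNS provider resolves names for a CDN, which in
    # turn serves scripts to a website; compromising the DNS provider taints
    # all three.
    deps = {
        "dns-provider": {"cdn.example"},
        "cdn.example": {"www.example"},
    }
    ```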

    A Survey on Routing in Anonymous Communication Protocols

    The Internet has undergone dramatic changes in the past two decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols.

    Pareto-Optimal Defenses for the Web Infrastructure: Theory and Practice

    The integrity of the content a user is exposed to when browsing the web relies on a plethora of non-web technologies and an infrastructure of interdependent hosts, communication technologies, and trust relations. Incidents like the Chinese Great Cannon or the MyEtherWallet attack make it painfully clear: the security of end users hinges on the security of the surrounding infrastructure: routing, DNS, content delivery, and the PKI. There are many competing, but isolated, proposals to increase security, from the network up to the application layer. So far, researchers have focused on analyzing attacks and defenses on specific layers. We still lack an evaluation of how, given the status quo of the web, these proposals can be combined, how effective they are, and at what cost the increase in security comes. In this work, we propose a graph-based analysis based on Stackelberg planning that considers a rich attacker model and a multitude of proposals from IPsec to DNSSEC and SRI. Our threat model considers the security of billions of users against attackers ranging from small hacker groups to nation-state actors. Analyzing the infrastructure of the Top 5k Alexa domains, we discover that the security mechanisms currently deployed are ineffective and that some infrastructure providers have a threat potential comparable to nations. We find that a considerable increase in security (up to 13% protected web visits) is possible at relatively modest cost, due to the effectiveness of mitigations at the application and transport layer, which dominate expensive infrastructure enhancements such as DNSSEC and IPsec.
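    The notion of Pareto-optimal defense combinations can be illustrated with a toy enumeration. The defenses, costs, and protection gains below are made up for illustration (the paper derives such trade-offs via Stackelberg planning over a measured infrastructure model, not by brute-force enumeration):

    ```python
    # Toy Pareto-frontier computation over defense combinations: keep only the
    # combinations for which no other combination offers strictly more
    # protection at equal-or-lower cost (or equal protection at lower cost).
    # All names and numbers are illustrative.

    from itertools import combinations

    DEFENSES = {"SRI": (1, 4), "HTTPS": (2, 6), "DNSSEC": (8, 3), "IPsec": (9, 2)}
    #             name: (deployment cost, protected-visits gain)

    def pareto_frontier(defenses: dict) -> list:
        names = list(defenses)
        combos = []
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                cost = sum(defenses[n][0] for n in combo)
                gain = sum(defenses[n][1] for n in combo)
                combos.append((combo, cost, gain))
        # A combination is dominated if another one is at least as cheap with
        # strictly more gain, or strictly cheaper with at least the same gain.
        frontier = [c for c in combos
                    if not any((o[1] <= c[1] and o[2] > c[2]) or
                               (o[1] < c[1] and o[2] >= c[2]) for o in combos)]
        return sorted(frontier, key=lambda c: c[1])
    ```

    With these toy numbers, deploying DNSSEC alone is dominated (SRI is cheaper and protects more), mirroring the paper's finding that cheap application-layer mitigations dominate expensive infrastructure enhancements.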

    Formally Reasoning about the Cost and Efficacy of Securing the Email Infrastructure (full version)

    Security in the Internet has historically been added post hoc, leaving services like email, which, after all, is used by 3.7 billion users, vulnerable to large-scale surveillance. For email alone, there is a multitude of proposals to mitigate known vulnerabilities, ranging from the introduction of completely new protocols to modifications of the communication paths used by big providers. Deciding which measures to deploy requires a deep understanding of the induced benefits, the cost, and the resulting effects. This paper proposes the first automated methodology for making formal deployment assessments. Our planning algorithm analyses the impact and cost-efficiency of different known mitigation strategies against an attacker in a formal threat model. This novel formalisation of an infrastructure attacker includes routing, name resolution, and application-level weaknesses. We apply the methodology to a large-scale scan of the Internet, and assess how protocols like IPsec, DNSSEC, DANE, SMTP over TLS and other mitigation techniques like server relocation can be combined to improve the confidentiality of email users in 45 combinations of attacker and defender countries and nine cost scenarios. This is the first deployment analysis for mitigation techniques at this scale.

    Formally Reasoning about the Cost and Efficacy of Securing the Email Infrastructure

    Security in the Internet has historically been added post hoc, leaving services like email, which, after all, is used by 3.7 billion users, vulnerable to large-scale surveillance. For email alone, there is a multitude of proposals to mitigate known vulnerabilities, ranging from the introduction of completely new protocols to modifications of the communication paths used by big providers. Deciding which measures to deploy requires a deep understanding of the induced benefits, the cost, and the resulting effects. This paper proposes the first automated methodology for making formal deployment assessments. Our planning algorithm analyses the impact and cost-efficiency of different known mitigation strategies against an attacker in a formal threat model. This novel formalisation of an infrastructure attacker includes routing, name resolution, and application-level weaknesses. We apply the methodology to a large-scale scan of the Internet, and assess how protocols like IPsec, DNSSEC, DANE, SMTP STS, SMTP over TLS and other mitigation techniques like server relocation can be combined to improve the confidentiality of email users in 45 combinations of attacker and defender countries and nine cost scenarios. This is the first deployment analysis for mitigation techniques at this scale.

    Poster: Quasi-ID: In fact, I am a human

    CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart) are the dominant Turing tests used to protect websites against bots that impersonate human users to gain access to various types of services. The test is designed to be very difficult for automated programs but comfortably easy for humans. As artificial intelligence research advances toward the field's biggest challenge, simulating the workings of a human brain, the complexity of CAPTCHA tests increases, making them more and more difficult for humans to answer. The problem is compounded by the latest research reports, which indicate that CAPTCHAs are in fact broken. We present Quasi-ID: a novel approach for determining whether or not a user is a human in a scalable and privacy-preserving manner. Our system uses smart devices as ubiquitous input devices for invoking a physical interaction with the user. Such an interaction between the user and their smart device can prove that the user is actually a human. Quasi-ID can be deployed today alongside current CAPTCHA solutions. It adds no additional burden to the web service and requires only non-persistent communication with the Quasi-ID service provider.

    PRIMA: Privacy-Preserving Identity and Access Management at Internet-Scale

    The management of identities on the Internet has evolved from the traditional approach, where each service provider stores and manages identities, to federated identity management, where identity management is delegated to a set of identity providers. On the one hand, federated identity ensures usability and provides economic benefits to service providers. On the other hand, it poses serious privacy threats to users as well as service providers: the technology prevalently deployed on the Internet allows identity providers to track the user's behavior across a broad range of services. In this work, we propose PRIMA, a universal credential-based authentication system for supporting federated identity management in a privacy-preserving manner. PRIMA does not require any interaction between service providers and identity providers during the authentication process, thus preventing identity providers from profiling users' behavior. Moreover, throughout the authentication process, PRIMA provides a mechanism for controlled disclosure of the users' private information. We have conducted comprehensive evaluations of the system to show the feasibility of our approach. Our performance analysis shows that an identity provider can process 1,426 to 3,332 requests per second as the key size is varied from 1024 to 2048 bits, respectively.

    A Survey on Routing in Anonymous Communication Protocols

    The Internet has undergone dramatic changes in the past 15 years, and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, ranging from profiling of users for monetizing personal information to nearly omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. Several such systems have been proposed in the literature, each of which offers anonymity guarantees in different scenarios and under different assumptions, reflecting the plurality of approaches for how messages can be anonymously routed to their destination. Understanding this space of competing approaches with their different guarantees and assumptions is vital for users to understand the consequences of different design options. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. To this end, we provide a taxonomy for clustering all prevalently considered approaches (including Mixnets, DC-nets, onion routing, and DHT-based protocols) with respect to their unique routing characteristics, deployability, and performance. This, in particular, encompasses the topological structure of the underlying network; the routing information that has to be made available to the initiator of the conversation; the underlying communication model; and performance-related indicators such as latency and communication layer. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols, and also help to clarify the relationship between the routing characteristics of these protocols and their performance and scalability. (24 pages, 4 tables, 4 figures)